OK
so welcome everybody to the AI lecture.
There's not going to be a quiz today.
I announced that this morning.
We still have server troubles.
The server is essentially up again.
The only pieces that are still missing
are the Alea login and the learner model.
So I'm hoping for that to come back up in the next hours.
We'll see.
And so we have to postpone the quiz
until probably Thursday morning.
You probably understand that without a login,
a quiz doesn't sensibly work.
OK.
We're in the process of optimizing
constraint propagation.
Remember,
we first attacked the problem with search:
search in the space of all partial assignments,
which we explored systematically,
completing them into total assignments, which
correspond to the solutions.
And now we're going one step above that,
namely searching the space of all constraint satisfaction
problem descriptions.
If we tweak a description in a way
that does not change the set of solutions, which keeps us
safe,
and that makes it tighter,
then we have something which describes the same problem
but admits a faster solution-finding process.
That's the game we're playing.
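To make the game concrete, here is a minimal sketch in Python of one such equivalence-preserving tightening step, on a toy CSP I made up for illustration (it is not from the lecture): two variables X, Y over {1, 2, 3} with the constraint X < Y.

```python
# Toy CSP (illustrative, not from the lecture): X, Y in {1, 2, 3}, constraint X < Y.
domains = {"X": {1, 2, 3}, "Y": {1, 2, 3}}

def lt(x, y):
    return x < y  # the binary constraint X < Y

# Delete values with no support on the other side of the constraint:
# X = 3 has no y with 3 < y, and Y = 1 has no x with x < 1. Removing them
# cannot lose a solution, so the tightened CSP is equivalent to the original,
# but search over it has strictly fewer candidates to try.
domains["X"] = {x for x in domains["X"] if any(lt(x, y) for y in domains["Y"])}
domains["Y"] = {y for y in domains["Y"] if any(lt(x, y) for x in domains["X"])}

print(domains)  # {'X': {1, 2}, 'Y': {2, 3}}
```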
And just as we used the notion of factored
representations to go from individual steps
to multi-steps, which made the whole thing very much
more effective,
this next step has the potential
of making things even more efficient.
The first step basically got us from, say,
10 queens to 1,000 queens, just to have
an easy measure of performance.
This will, in many cases, give us
another couple of orders of magnitude of problem size.
We've already seen tightening via arc consistency,
arc consistency being a technique for shrinking
the domains so that every remaining value
has support across all constraint pairs.
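As a reminder of how that propagation works, here is a compact sketch of the standard AC-3 worklist algorithm; the names `revise` and `ac3` are the conventional ones, not taken from this lecture, and binary constraints are assumed to be given as predicates indexed by directed variable pairs.

```python
from collections import deque

def revise(domains, constraints, xi, xj):
    """Delete values of xi that have no supporting value in xj's domain."""
    removed = False
    for x in set(domains[xi]):  # iterate over a copy so we can discard
        if not any(constraints[(xi, xj)](x, y) for y in domains[xj]):
            domains[xi].discard(x)
            removed = True
    return removed

def ac3(domains, constraints):
    """Shrink domains until every directed arc (xi, xj) is consistent."""
    queue = deque(constraints)          # start with all directed arcs
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, constraints, xi, xj):
            if not domains[xi]:
                return False            # a domain ran empty: no solution
            # xi's domain shrank, so arcs pointing into xi need re-checking
            queue.extend(arc for arc in constraints
                         if arc[1] == xi and arc[0] != xj)
    return True

# Example: X < Y < Z over {1, 2, 3} propagates down to singleton domains.
doms = {"X": {1, 2, 3}, "Y": {1, 2, 3}, "Z": {1, 2, 3}}
cons = {("X", "Y"): lambda x, y: x < y, ("Y", "X"): lambda y, x: x < y,
        ("Y", "Z"): lambda y, z: y < z, ("Z", "Y"): lambda z, y: y < z}
print(ac3(doms, cons), doms)  # True {'X': {1}, 'Y': {2}, 'Z': {3}}
```

The point of the worklist is that an arc is only re-examined when the domain at its head has actually shrunk, which is what keeps propagation cheap enough to run before or even inside search.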
And now we will look at decomposition techniques.